MPI - Introduction to the Message Passing Interface (MPI)
DESCRIPTION
The Message Passing Interface (MPI) is a component of the Message Passing
Toolkit (MPT), which is a software package that supports parallel
programming across a network of computer systems through a technique
known as message passing. The goal of MPI, simply stated, is to develop
a widely used standard for writing message-passing programs. As such, the
interface establishes a practical, portable, efficient, and flexible
standard for message passing.
This MPI implementation supports the MPI 1.2 standard, as documented by
the MPI Forum in the spring 1997 release of MPI: A Message Passing
Interface Standard. In addition, certain MPI-2 features are also
supported. In designing MPI, the MPI Forum sought to make use of the
most attractive features of a number of existing message passing systems,
rather than selecting one of them and adopting it as the standard. Thus,
MPI has been strongly influenced by work at the IBM T. J. Watson Research
Center, Intel's NX/2, Express, nCUBE's Vertex, p4, and PARMACS. Other
important contributions have come from Zipcode, Chimp, PVM, Chameleon,
and PICL.
MPI requires the presence of an Array Services daemon (arrayd) on each
host that is to run MPI processes. In a single-host environment, no
system administration effort should be required beyond installing and
activating arrayd. However, users wishing to run MPI applications across
multiple hosts will need to ensure that those hosts are properly
configured into an array. For more information about Array Services, see
the arrayd(1M), arrayd.conf(4), and array_services(5) man pages.
When running across multiple hosts, users must set up their .rhosts files
to enable remote logins. Note that MPI does not use rsh, so it is not
necessary that rshd be running on security-sensitive systems; the .rhosts
file was simply chosen to eliminate the need to learn yet another
mechanism for enabling remote logins.
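As a sketch of the expected setup (the host and user names below are
placeholders, not values taken from this man page), the .rhosts file in
the user's home directory on each remote host lists the hosts, and
optionally the user names, that are allowed to log in:

     hosta.example.com  myuser
     hostb.example.com  myuser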
Other sources of MPI information are as follows:
* Man pages for MPI library functions
* A copy of the MPI standard as PostScript or hypertext on the World Wide Web
MPI_UNBUFFERED_STDIO
Normally, mpirun line-buffers output received from the MPI processes
on both the stdout and stderr standard IO streams. This prevents
lines of text from different processes from possibly being merged
into one line, and allows use of the mpirun -prefix option.
Of course, there is a limit to the amount of buffer space that
mpirun has available (currently, about 8,100 characters can appear
between new line characters per stream per process). If more
characters are emitted before a new line character, the MPI program
will abort with an error message.
Setting the MPI_UNBUFFERED_STDIO environment variable disables this
buffering. This is useful, for example, when a program's rank 0
emits a series of periods over time to indicate progress of the
program. With buffering, the entire line of periods will be output
only when the new line character is seen. Without buffering, each
period will be displayed as soon as mpirun receives it
from the MPI program. (Note that the MPI program still needs to
call fflush(3) or FLUSH(101) to flush the stdout buffer from the
application code.)
Additionally, setting MPI_UNBUFFERED_STDIO allows an MPI program
that emits very long output lines to execute correctly.
NOTE: If MPI_UNBUFFERED_STDIO is set, the mpirun -prefix option is
ignored.
Default: Not set
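The following minimal C sketch (an illustration, not taken from this
man page) shows the progress-indicator case described above: rank 0
prints a period for each unit of work and flushes stdout so that, with
MPI_UNBUFFERED_STDIO set, each period reaches the terminal as soon as
mpirun receives it rather than when the final new line character is
emitted.

     #include <mpi.h>
     #include <stdio.h>
     #include <unistd.h>

     int main(int argc, char **argv)
     {
         int rank, step;

         MPI_Init(&argc, &argv);
         MPI_Comm_rank(MPI_COMM_WORLD, &rank);

         if (rank == 0) {
             for (step = 0; step < 10; step++) {
                 sleep(1);          /* stands in for a unit of real work */
                 printf(".");
                 fflush(stdout);    /* push the period out of the application;
                                       without MPI_UNBUFFERED_STDIO, mpirun
                                       still holds it until a new line arrives */
             }
             printf("\n");
         }

         MPI_Finalize();
         return 0;
     }

With MPI_UNBUFFERED_STDIO unset, all ten periods appear at once when
the final new line is printed; with it set, one period appears per
second.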
MPI_USE_GM (IRIX systems only)
Requires the MPI library to use the Myrinet (GM protocol) OS bypass
driver as the interconnect when running across multiple hosts or
running with multiple binaries. If a GM connection cannot be
established among all hosts in the MPI job, the job is terminated.
For more information, see the section titled "Default Interconnect
Selection."
Default: Not set
MPI_USE_GSN (IRIX 6.5.12 systems or later)
Requires the MPI library to use the GSN (ST protocol) OS bypass
driver as the interconnect when running across multiple hosts or
running with multiple binaries. If a GSN connection cannot be
established among all hosts in the MPI job, the job is terminated.
GSN imposes a limit of one MPI process using GSN per CPU on a
system. For example, on a 128-CPU system, you can run multiple MPI
jobs, as long as the total number of MPI processes using the GSN
bypass does not exceed 128.
Once the maximum allowed number of MPI processes using GSN is reached,
subsequent MPI jobs return an error to the user output, as in the